library(tidyverse) # for graphing and data cleaning
library(gardenR) # for Lisa's garden data
library(lubridate) # for date manipulation
library(ggthemes) # for even more plotting themes
library(geofacet) # for special faceting with US map layout
theme_set(theme_minimal()) # My favorite ggplot() theme :)
# Lisa's garden data
data("garden_harvest")
# Seeds/plants (and other garden supply) costs
data("garden_spending")
# Planting dates and locations
data("garden_planting")
# Tidy Tuesday data
kids <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-09-15/kids.csv')
Before starting your assignment, you need to get yourself set up on GitHub and make sure GitHub is connected to RStudio. To do that, you should read the instructions (through the “Cloning a repo” section) and watch the video here. Then, do the following (if you get stuck on a step, don’t worry, I will help! You can always get started on the homework and we can figure out the GitHub piece later):
Set keep_md: TRUE in the YAML heading. The .md file is a markdown (NOT R Markdown) file that is an interim step in creating the html file. Markdown files are displayed fairly nicely on GitHub, so we want to keep it and look at it there. Click the boxes next to these two files, commit the changes (remember to include a commit message), and push them (green up arrow). Put your name at the top of the document.
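For reference, the relevant part of the YAML heading might look something like this (a minimal sketch; the title and any other fields are placeholders, not the assignment's actual values):
---
title: "Weekly Exercises"
output: 
  html_document:
    keep_md: TRUE
---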
For ALL graphs, you should include appropriate labels.
Feel free to change the default theme, which I currently have set to theme_minimal().
Use good coding practice. Read the short sections on good code with pipes and ggplot2. This is part of your grade!
When you are finished with ALL the exercises, uncomment the options at the top so your document looks nicer. Don’t do it before then, or else you might miss some important warnings and messages.
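The commented-out options are the knitr chunk options in the setup chunk. Once everything runs cleanly, uncommenting something like the following hides the messages and warnings in the knitted file (a sketch; your template's exact options may differ):
knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE)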
These exercises will reiterate what you learned in the “Expanding the data wrangling toolkit” tutorial. If you haven’t gone through the tutorial yet, you should do that first.
Summarize the garden_harvest data to find the total harvest weight in pounds for each vegetable and day of week (HINT: use the wday() function from lubridate). Display the results so that the vegetables are rows but the days of the week are columns.
garden_harvest %>%
mutate(weekday = wday(date, label = TRUE)) %>%
group_by(vegetable, weekday) %>%
summarize(tot_harvest_lb = sum(weight) * 0.00220462) %>% # weight is in grams; 1 g = 0.00220462 lb
pivot_wider(names_from = weekday, values_from = tot_harvest_lb)
Summarize the garden_harvest data to find the total harvest in pounds for each vegetable variety, and then try adding the plot variable from the garden_planting table. This will not turn out perfectly. What is the problem? How might you fix it?
garden_harvest %>%
group_by(vegetable, variety) %>%
summarize(tot_harvest_lb = sum(weight) * 0.00220462) %>%
left_join(garden_planting, by = c("vegetable", "variety"))
The problem is that some varieties were planted in multiple plots, so after the join their total harvest in pounds appears in multiple rows, one per plot. To fix this, we could assign each variety to a single plot (for example, the first one that appears in garden_planting), as in the sketch below.
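A minimal sketch of that fix, assuming we are happy to keep only the first plot listed for each variety (garden_first_plot is a helper table introduced here for illustration):
garden_first_plot <- garden_planting %>%
  group_by(vegetable, variety) %>%
  slice(1) %>% # keep only the first plot listed for each variety
  ungroup()

garden_harvest %>%
  group_by(vegetable, variety) %>%
  summarize(tot_harvest_lb = sum(weight) * 0.00220462) %>%
  left_join(garden_first_plot, by = c("vegetable", "variety"))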
Use the garden_harvest and garden_spending datasets, along with data from somewhere like this, to answer this question. You can answer this in words, referencing various join functions. You don’t need R code but could provide some if it’s helpful.
To figure this out, you could group by vegetable and create a new summarized column with the total weight of each vegetable. Then, you could compute how much buying that amount of vegetables would cost somewhere like the link provided. After that, it would be relatively easy to compare that store total with the total spent on the garden from the garden_spending data.
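A rough sketch of what this could look like in code. The store_prices table here is hypothetical (not part of gardenR), with made-up price_per_lb values standing in for prices from the link; the comparison uses the price_with_tax column in garden_spending:
# hypothetical store prices; vegetables and prices are made up for illustration
store_prices <- tibble(
  vegetable = c("tomatoes", "zucchini"),
  price_per_lb = c(3.49, 1.89)
)

garden_harvest %>%
  group_by(vegetable) %>%
  summarize(tot_harvest_lb = sum(weight) * 0.00220462) %>%
  inner_join(store_prices, by = "vegetable") %>%
  summarize(store_value = sum(tot_harvest_lb * price_per_lb))

# then compare store_value to the total spent on the garden:
garden_spending %>%
  summarize(total_spent = sum(price_with_tax))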
Subset the garden_harvest data to the tomatoes and find the total harvest in pounds and the first harvest date for each variety. Then, plot the totals with the varieties ordered by first harvest date.
garden_harvest %>%
filter(vegetable == "tomatoes") %>%
group_by(variety) %>%
summarize(first_harvest = min(date), total_harvest_lb = sum(weight) * 0.00220462) %>%
ggplot(aes(x = total_harvest_lb, y = fct_reorder(variety, first_harvest))) +
geom_col() +
labs(x = "Weight (lb)", y = "", title = "Tomato Varieties Harvests")
In the garden_harvest data, create two new variables: one that makes the varieties lowercase and another that finds the length of the variety name. Arrange the data by vegetable and length of variety name (smallest to largest), with one row for each vegetable variety. HINT: use str_to_lower(), str_length(), and distinct().
garden_harvest %>%
mutate(lowercase = str_to_lower(variety), variety_name_length = str_length(variety)) %>%
group_by(vegetable, variety, lowercase, variety_name_length) %>% # grouping by the new columns keeps them in the output
summarize(total_weight = sum(weight)) %>% # one row per variety now, so distinct() is not needed
arrange(vegetable, variety_name_length)
In the garden_harvest data, find all distinct vegetable varieties that have “er” or “ar” in their name. HINT: str_detect() with an “or” statement (use the | for “or”) and distinct().
garden_harvest %>%
filter(str_detect(variety, "(e|a)r")) %>%
select(vegetable, variety) %>%
distinct()
In this activity, you’ll examine some factors that may influence the use of bicycles in a bike-renting program. The data come from Washington, DC and cover the last quarter of 2014.
Two data tables are available:
- Trips contains records of individual rentals
- Stations gives the locations of the bike rental stations

Here is the code to read in the data. We do this a little differently than usual, which is why it is included here rather than at the top of this file. To avoid repeatedly re-reading the files, start the data import chunk with {r cache = TRUE} rather than the usual {r}.
data_site <-
"https://www.macalester.edu/~dshuman1/data/112/2014-Q4-Trips-History-Data.rds"
Trips <- readRDS(gzcon(url(data_site)))
Stations <- read_csv("http://www.macalester.edu/~dshuman1/data/112/DC-Stations.csv")
NOTE: The Trips data table is a random subset of 10,000 trips from the full quarterly data. Start with this small data table to develop your analysis commands. When you have this working well, you should access the full data set of more than 600,000 events by removing -Small from the name of the data_site.
It’s natural to expect that bikes are rented more at some times of day, some days of the week, some months of the year than others. The variable sdate gives the time (including the date) that the rental started. Make the following plots and interpret them:
Make a density plot of the rental start times, using the sdate variable and geom_density().
Trips %>%
ggplot(aes(sdate)) +
geom_density() +
labs(x = "date", y = "", title = "Amount of rentals over time")
Use mutate() with lubridate’s hour() and minute() functions to extract the hour of the day and minute within the hour from sdate, then make a density plot of the rental times. Hint: A minute is 1/60 of an hour, so create a variable where 3:30 is 3.5 and 3:45 is 3.75.
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60) %>%
ggplot(aes(time)) +
geom_density() +
labs(title = "Amount of rentals over a day", y="")
Make a bar graph of the number of rentals by day of the week:
Trips %>%
mutate(weekday = wday(sdate, label = TRUE, abbr = TRUE)) %>%
ggplot(aes(y = weekday)) +
geom_bar() +
labs(title ="Amount of rentals")
Facet the density plot of rental times by day of the week:
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60, weekday = wday(sdate, label = TRUE, abbr = TRUE)) %>%
ggplot(aes(time)) +
geom_density() +
facet_wrap(~weekday) +
labs(title="Daily rentals")
On weekdays, there are two spikes, around 8am and 6pm, with a dip in between; these are most likely commuters riding to and from work. On weekends, there is instead a single, smoother and more gradual curve that peaks around 2pm.
The variable client describes whether the renter is a regular user (level Registered) or has not joined the bike-rental organization (level Casual). The next set of exercises investigates whether these two categories of users show different rental behavior and how client interacts with the patterns you found in the previous exercises.
Set the fill aesthetic for geom_density() to the client variable. You should also set alpha = .5 for transparency and color = NA to suppress the outline of the density function.
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60, weekday = wday(sdate, label = TRUE, abbr = TRUE)) %>%
ggplot() +
geom_density(aes(x = time, fill = client), alpha = .5, color = NA) +
facet_wrap(~weekday) +
labs(title="Daily rentals")
On the weekdays as well as the weekends, the casual clients follow the pattern I previously noted for weekends: a single, more gradual spike.
Add the argument position = position_stack() to geom_density(). In your opinion, is this better or worse in terms of telling a story? What are the advantages/disadvantages of each?
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60, weekday = wday(sdate, label = TRUE, abbr = TRUE)) %>%
ggplot() +
geom_density(aes(x = time, fill = client), alpha = .5, color = NA, position = position_stack()) +
facet_wrap(~weekday) +
labs(title="Daily rentals")
I think the original version tells the story better. The stacked version is a little misleading: the top curve is the combination of the casual and registered densities, so you can’t see how different the two patterns are, whereas the first version makes it clear that they are completely different.
Return to the unstacked graph (without position = position_stack()). Add a new variable to the dataset called weekend, which will be “weekend” if the day is Saturday or Sunday and “weekday” otherwise (HINT: use the ifelse() function and the wday() function from lubridate). Then, update the graph from the previous problem by faceting on the new weekend variable.
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60,
       weekday = wday(sdate, label = TRUE, abbr = TRUE),
       weekend = ifelse(weekday %in% c("Sat", "Sun"), "weekend", "weekday")) %>%
ggplot() +
geom_density(aes(x = time, fill = client), alpha = .5, color = NA) +
facet_wrap(~weekend, ncol = 1) +
labs(title="Bike Rentals")
Now make the same graph, but facet on client and fill with the weekend variable. What information does this graph tell you that the previous one didn’t? Is one graph better than the other?
Trips %>%
mutate(time = hour(sdate) + minute(sdate)/60,
       weekday = wday(sdate, label = TRUE, abbr = TRUE),
       weekend = ifelse(weekday %in% c("Sat", "Sun"), "weekend", "weekday")) %>%
ggplot() +
geom_density(aes(x = time, fill = weekend), alpha = .5, color = NA) +
facet_wrap(~client, ncol = 1) +
labs(title="Bike Rentals")
This graph makes it easier to compare how registered clients behave on different days, and likewise for casual clients, whereas the previous graph was better for comparing the two client types within weekends or within weekdays. I don’t think either is better overall; each is better suited to a particular kind of analysis. For example, in this second graph we can see that casual clients have a slightly higher peak on weekends than on weekdays.
Use the latitude and longitude variables in Stations to make a visualization of the total number of departures from each station in the Trips data. Use either color or size to show the variation in number of departures. We will improve this plot next week when we learn about maps!
Trips %>%
  # join station locations directly on the station name; no helper key column needed
  left_join(Stations, by = c("sstation" = "name")) %>%
  group_by(sstation, lat, long) %>%
  summarize(num_departures = n()) %>%
  ggplot(aes(x = long, y = lat, color = num_departures)) +
  geom_point() +
  labs(x = "longitude", y = "latitude", color = "Number of Departures")
A similar plot, this time showing the proportion of departures from each station that were made by casual clients:
Trips %>%
  left_join(Stations, by = c("sstation" = "name")) %>%
  mutate(casual = ifelse(client == "Casual", 1, 0)) %>%
  group_by(sstation, lat, long) %>%
  summarize(percent_casual = sum(casual) / n()) %>%
  ggplot(aes(x = long, y = lat, color = percent_casual)) +
  geom_point() +
  labs(x = "longitude", y = "latitude", color = "Percent of Casual Clients")
Find the 10 station-date combinations with the most departures. HINT: as_date(sdate) converts sdate from date-time format to date format.
top_10 <- Trips %>%
mutate(date = as_date(sdate)) %>%
group_by(sstation, date) %>%
summarize(num_departures = n()) %>%
arrange(desc(num_departures)) %>%
head(10)
top_10
Use semi_join() to restrict Trips to only the trips that happened during one of those top 10 station-days:
Trips %>%
  mutate(date = as_date(sdate)) %>%
  semi_join(top_10, by = c("sstation", "date"))
Then, for each client type, find the proportion of those trips that fall on each day of the week, displayed with the days as columns:
Trips %>%
  mutate(date = as_date(sdate)) %>%
  semi_join(top_10, by = c("sstation", "date")) %>%
  mutate(weekday = wday(date, label = TRUE)) %>%
  group_by(client, weekday) %>%
  summarize(count = n()) %>%
  # summarize() drops the weekday grouping, so the data is now grouped by
  # client and sum(count) is the total number of trips for each client type
  mutate(proportion = count / sum(count)) %>%
  select(-count) %>%
  pivot_wider(names_from = weekday, values_from = proportion)
Registered clients have much lower proportions on the weekend days, while the weekend days are among the highest for casual clients, especially Saturday. The highest-proportion days for registered clients were Wednesday and Thursday. The highest-proportion day for casual clients was Saturday, by about 20%, followed by Sunday and Thursday.
DID YOU REMEMBER TO GO BACK AND CHANGE THIS SET OF EXERCISES TO THE LARGER DATASET? IF NOT, DO THAT NOW.
https://github.com/ocoing05/data-science-weekly-exercise-3
https://github.com/ocoing05/data-science-weekly-exercise-3/blob/main/03_exercises.md
This problem uses the data from the Tidy Tuesday competition this week, kids. If you need to refresh your memory on the data, read about it here.
Recreate the graphic using facet_geo(). The graphic won’t load below since it came from a location on my computer, so you’ll have to reference the original html on the Moodle page to see it.
DID YOU REMEMBER TO UNCOMMENT THE OPTIONS AT THE TOP?
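For reference, a minimal sketch of the kind of facet_geo() plot this exercise calls for. The choices of variable == "lib" (public library spending) and inf_adj_perchild are assumptions about the target graphic, not taken from the assignment:
kids %>%
  filter(variable == "lib") %>% # assumed choice: public library spending
  ggplot(aes(x = year, y = inf_adj_perchild)) +
  geom_line() +
  facet_geo(~state) + # one panel per state, arranged like a US map
  labs(x = "", y = "Spending per child ($1000s, inflation-adjusted)",
       title = "Public spending on libraries by state")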